%0 Conference Proceedings
%4 sid.inpe.br/sibgrapi/2018/09.10.20.04
%2 sid.inpe.br/sibgrapi/2018/09.10.20.04.03
%@doi 10.1109/SIBGRAPI.2018.00019
%T Multimodal Human Action Recognition Based on a Fusion of Dynamic Images using CNN descriptors
%D 2018
%A Cardenas, Edwin Jonathan Escobedo,
%@affiliation Federal University of Ouro Preto
%E Ross, Arun,
%E Gastal, Eduardo S. L.,
%E Jorge, Joaquim A.,
%E Queiroz, Ricardo L. de,
%E Minetto, Rodrigo,
%E Sarkar, Sudeep,
%E Papa, João Paulo,
%E Oliveira, Manuel M.,
%E Arbeláez, Pablo,
%E Mery, Domingo,
%E Oliveira, Maria Cristina Ferreira de,
%E Spina, Thiago Vallin,
%E Mendes, Caroline Mazetto,
%E Costa, Henrique Sérgio Gutierrez,
%E Mejail, Marta Estela,
%E Geus, Klaus de,
%E Scheer, Sergio,
%B Conference on Graphics, Patterns and Images, 31 (SIBGRAPI)
%C Foz do Iguaçu, PR, Brazil
%8 29 Oct.-1 Nov. 2018
%I IEEE Computer Society
%J Los Alamitos
%S Proceedings
%K action recognition, dynamic images, RGB-D data, Kinect, CNN
%X In this paper, we propose a dynamic-image-based approach to action recognition. Specifically, we exploit the multimodal information recorded by a Kinect sensor (RGB-D and skeleton joint data). We combine ideas from rank pooling and skeleton optical spectra to generate dynamic images that summarize an action sequence into single flow images. We group our dynamic images into five groups: a dynamic color group (DC), a dynamic depth group (DD), and three dynamic skeleton groups (DXY, DYZ, DXZ). Because an action is composed of different postures over time, we generate N different dynamic images capturing the main postures for each dynamic group. Next, we apply a pre-trained flow-CNN to extract spatiotemporal features with max-mean aggregation. The proposed method was evaluated on a public benchmark dataset, UTD-MHAD, and achieved state-of-the-art results.
%@language en
%3 Multimodal_Human_Action_Recognition_Based_on_a_Fusion_of_Dynamic_Images_using_CNN_descriptors.pdf
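
Note on the abstract's pipeline: the "dynamic image" construction it builds on has a well-known closed form, approximate rank pooling (Bilen et al., CVPR 2016), which collapses a clip of T frames into one image using per-frame weights alpha_t = 2t - T - 1. The sketch below is a minimal NumPy illustration of that generic operation under those assumptions; it is not the authors' code, and the paper itself applies such images per modality (DC, DD, DXY, DYZ, DXZ) before the flow-CNN stage.

    import numpy as np

    def approximate_rank_pooling(frames):
        """Collapse a clip of shape (T, H, W, C) into one dynamic image.

        Uses the closed-form approximate rank pooling weights
        alpha_t = 2t - T - 1 (Bilen et al., 2016): later frames
        contribute positively, earlier frames negatively, so the
        result encodes the temporal evolution of the action.
        """
        T = frames.shape[0]
        t = np.arange(1, T + 1, dtype=np.float64)
        alpha = 2.0 * t - T - 1.0  # per-frame weights, shape (T,)
        # Weighted sum over the time axis -> (H, W, C)
        di = np.tensordot(alpha, frames.astype(np.float64), axes=(0, 0))
        # Rescale to 0-255 so the result can be fed to an image CNN
        di -= di.min()
        if di.max() > 0:
            di = 255.0 * di / di.max()
        return di.astype(np.uint8)

    # Usage example with a random 16-frame RGB clip (hypothetical data)
    clip = np.random.randint(0, 256, size=(16, 224, 224, 3), dtype=np.uint8)
    dynamic_image = approximate_rank_pooling(clip)  # shape (224, 224, 3)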